#LLM tool discovery protocol
govindhtech · 2 years ago
Web3’s generative AI platform on Google Cloud
What is Generative AI for Web3?
Generative AI (gen AI) has captured the public’s attention, and large language models (LLMs) have seemed to appear in the news every day this year. The excitement may be shifting from cryptocurrencies to next-generation artificial intelligence, but the academic community is closely examining the convergence of blockchain and AI, and the Web3 community hasn’t been afraid to explore. For instance:
The Machine Learning and Blockchain Research Summit was held at Coinbase.
The decentralized AI platform SAKSHI includes a proofs layer to overcome adversarial issues.
FalconX made Satoshi available to help cryptocurrency traders.
Etherscan released Code Reader, a tool that helps users understand smart contract code.
Solana Labs released a plugin that provides real-time data and executes actual transactions, such as token transfers.
In this blog post, Google discusses what the community is creating in order to move beyond experiments, and goes over in more detail some of the digital asset and Web3 use cases it is helping customers implement.
Document search and synthesis use case
Searching for and synthesizing documents is the first use case. It focuses on explanation, summarization, and identification, and it makes it possible for non-technical (i.e., business) users to learn from large document collections. For instance, gen AI can help summarize a blockchain protocol’s whitepaper or identify flaws in the description of a smart contract. Numerous customers choose these applications as their initial use cases for assessing LLMs, giving internal users access to both private and public data. This use case is facilitated by Vertex AI Search, which makes it simple to construct generative AI search engines quickly.
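As a rough, hedged sketch of how a query against such a document store might look with the Vertex AI Search (Discovery Engine) client library: the project ID and data store name below are placeholders, and the data store is assumed to already contain the indexed whitepapers.

```python
# Minimal sketch, assuming a Vertex AI Search data store that already indexes
# protocol whitepapers; the project and data store IDs are placeholders.
from google.cloud import discoveryengine_v1 as discoveryengine

client = discoveryengine.SearchServiceClient()
serving_config = client.serving_config_path(
    project="my-project",            # placeholder
    location="global",
    data_store="whitepaper-store",   # placeholder
    serving_config="default_config",
)

request = discoveryengine.SearchRequest(
    serving_config=serving_config,
    query="Summarize the consensus mechanism described in the protocol whitepaper",
    page_size=5,
)

# Print the IDs of the top matching documents.
for result in client.search(request):
    print(result.document.id)
```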
Customer service support and enhanced virtual assistants use case
The next use case involves customer service support and enhanced virtual assistants. This is a client-facing application. Conversational AI is the next step in the development of virtual agents, helpers, and bots: it goes one step further and enables more natural customer interaction with LLM-powered virtual agents, including giving the agent the ability to carry out transactions. You can create conversational AI with cutting-edge virtual agents using Vertex AI Conversation and Dialogflow CX.
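As a hedged sketch, one conversational turn against a Dialogflow CX agent from Python might look roughly like the following; the project, location, and agent IDs are placeholders, and any transaction fulfillment would live in the agent’s webhooks rather than in this client code.

```python
# Rough sketch of sending one user utterance to a Dialogflow CX agent;
# project, location, and agent IDs are placeholders.
import uuid
from google.cloud import dialogflowcx_v3 as cx

client = cx.SessionsClient()  # regional agents need client_options with a regional endpoint
session = client.session_path("my-project", "global", "my-agent-id", str(uuid.uuid4()))

query_input = cx.QueryInput(
    text=cx.TextInput(text="What would it cost to transfer 0.5 ETH right now?"),
    language_code="en",
)
response = client.detect_intent(
    request=cx.DetectIntentRequest(session=session, query_input=query_input)
)

# Print the agent's reply messages for this turn.
for message in response.query_result.response_messages:
    if message.text:
        print(" ".join(message.text.text))
```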
Content discovery use case
Another crucial use case involves content discovery. The Web3 example is what Google calls cryptocurrency trading research. Analysts might begin the day by reviewing the news stories that affected their portfolio of cryptocurrencies, then hunker down and concentrate on what counts. Consider how this Vertex AI Search demo might be applied to cryptocurrency trading, for instance. That’s not all, though: analysts can trade and build an algorithmic crypto trading strategy based on what they discover.
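As one illustration of that morning briefing, here is a small sketch that asks the text-bison model on Vertex AI to relate headlines to a portfolio; the project ID, portfolio, and headlines are purely illustrative.

```python
# Illustrative sketch: summarize how news headlines relate to a crypto portfolio.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-project", location="us-central1")  # placeholder project ID
model = TextGenerationModel.from_pretrained("text-bison")

headlines = [
    "ETH staking inflows hit a monthly high",
    "Major exchange reports outage during a volatility spike",
]
prompt = (
    "Portfolio: ETH, SOL.\n"
    "For each headline below, explain in one bullet how it could affect this portfolio:\n"
    + "\n".join(f"- {h}" for h in headlines)
)

print(model.predict(prompt, temperature=0.2, max_output_tokens=256).text)
```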
The developer efficiency use case
Next comes the developer efficiency use case. In this context, the most popular Web3 use case is probably AI-assisted development. For instance, AI-assisted development can help programmers navigate the challenges of writing smart contracts in Solidity. Just as Google Cloud’s Codey foundation models, or Duet AI for Google Cloud in the Integrated Development Environment (IDE) and code editor, can be used to generate documentation for APIs, a Web3 infrastructure provider can do the same. Four things are top of mind (a code-generation sketch follows the list):
1. Inline code completion
2. Boilerplate code generation
3. Code explanation, where chatbots can answer questions about code
4. Code security checkpoints, including suggested fixes for risky dependencies
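As a sketch of the boilerplate-generation item above, Codey’s code generation model can be called through the Vertex AI SDK; the project ID is a placeholder and the prompt is illustrative.

```python
# Sketch: generate boilerplate Solidity code with Codey (code-bison) on Vertex AI.
import vertexai
from vertexai.language_models import CodeGenerationModel

vertexai.init(project="my-project", location="us-central1")  # placeholder project ID
model = CodeGenerationModel.from_pretrained("code-bison")

response = model.predict(
    prefix=(
        "Write a minimal Solidity ERC-20 style token contract skeleton with a fixed "
        "total supply and a transfer function, and add a comment above each function."
    ),
    temperature=0.2,
    max_output_tokens=1024,
)
print(response.text)
```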
LLM orchestration and routing use case
The more sophisticated use case of LLM orchestration and routing is one Google has seen its customers develop. Most businesses will use different LLMs for different use cases; consider each of the use cases mentioned in this article. They won’t all be implemented with the same large language model. How do you choose the best model to invoke for a given user inquiry (i.e., prompt)? Google has demonstrated this using LangChain’s RouterChain and Google’s text-bison model.
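A minimal sketch of that routing pattern, assuming the 2023-era LangChain router chains and its VertexAI wrapper around text-bison; the destination names, descriptions, and prompts are illustrative.

```python
# Sketch: route a prompt to one of several LLM chains with LangChain's router chains.
from langchain.chains import LLMChain
from langchain.chains.router import MultiPromptChain
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE
from langchain.llms import VertexAI
from langchain.prompts import PromptTemplate

llm = VertexAI(model_name="text-bison")

# One destination chain per use case; these names and prompts are illustrative.
prompt_infos = [
    {
        "name": "whitepaper",
        "description": "summarizing protocol whitepapers",
        "prompt_template": "Summarize the following whitepaper excerpt:\n{input}",
    },
    {
        "name": "solidity",
        "description": "explaining or reviewing Solidity code",
        "prompt_template": "Explain what this Solidity snippet does:\n{input}",
    },
]
destination_chains = {
    p["name"]: LLMChain(
        llm=llm,
        prompt=PromptTemplate(template=p["prompt_template"], input_variables=["input"]),
    )
    for p in prompt_infos
}

# The router LLM reads the destination descriptions and picks a chain for each prompt.
destinations = "\n".join(f'{p["name"]}: {p["description"]}' for p in prompt_infos)
router_prompt = PromptTemplate(
    template=MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations),
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)

chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=destination_chains["whitepaper"],
)
print(chain.run("What does the transfer() function in this contract do?"))
```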
The creative use cases
Last but certainly not least are the creative use cases for Web3, notably generative AI for images and videos. This is where NFTs come in. Google’s Imagen foundation model is made available through Vertex AI. For instance, say you have an NFT and would like to produce new images in its style, or new images or videos that feature your NFT as their subject. Digital watermarking and verification are now available in Imagen (experimental), and Imagen enables you to do the following (a sketch follows the list):
Create new images from a text prompt
Edit images using a text prompt
Generate a description for an image
Adapt a model to a particular subject
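As a sketch of the first item, generating new images in an NFT’s style with Imagen on Vertex AI: the preview import path and model version shown here are assumptions from the 2023 SDK, and the project ID and prompt are placeholders.

```python
# Sketch: generate new images in the style of an NFT with Imagen on Vertex AI.
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="my-project", location="us-central1")  # placeholder project ID
model = ImageGenerationModel.from_pretrained("imagegeneration@002")  # assumed model version

response = model.generate_images(
    prompt="A pixel-art fox character surfing a neon wave, in the style of a collectible NFT",
    number_of_images=2,
)
response.images[0].save("nft_variant_0.png")  # save the first generated image locally
```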
That was a quick overview of some AI use cases for Web3. Web3 and other developers can build the use cases above using generative AI on Google Cloud.
jcmarchi · 2 years ago
AI Can Write Wedding Toast. But What Happens When It’s Asked to Build a Bomb? - Technology Org
New Post has been published on https://thedigitalinsider.com/ai-can-write-wedding-toast-but-what-happens-when-its-asked-to-build-a-bomb-technology-org/
Over the past year, large language model (LLM) AIs have become incredibly adept at generating and synthesizing information and producing humanlike outputs.
LLMs are likened to digital librarians, as they have been trained on vast datasets sourced directly from the internet and can therefore generate or summarize text on nearly any topic. As a result, these LLMs have become ubiquitous in such fields as copywriting, software engineering, and entertainment.
Scientific discovery, artificial intelligence – conceptual artistic interpretation. Image generated with DALL·E 3
However, the body of knowledge and capabilities in LLMs make them attractive targets for malicious actors, and they are highly susceptible to failure modes—often referred to as jailbreaks—that trick these models into generating biased, toxic, or objectionable content. 
Jailbreaking an LLM is akin to fooling these digital librarians into revealing information they are programmed to withhold, such as instructions for how to build a bomb, defraud a charity, or reveal private credit card information. 
This happens when users manipulate the model’s input prompts to bypass ethical or safety guidelines, asking a question in a coded language that the librarian can’t help but answer, revealing information it’s supposed to keep private.
Alex Robey, a Ph.D. candidate in the School of Engineering and Applied Science, is developing tools to protect LLMs against those seeking to jailbreak these models. He shares insights from his latest research paper on this evolving field, particularly emphasizing the challenges and solutions surrounding the robustness of LLMs against jailbreaking attacks.
Bad actors co-opting Artificial Intelligence
Robey emphasizes the rapid growth and widespread deployment of LLMs in the last year, calling popular LLMs like OpenAI’s ChatGPT “one of the most prevalent artificial intelligence technologies available.”
This explosion in popularity has been likened to the advent of the internet. It underscores the transformative nature of LLMs, and the utility of these models spans a broad spectrum of applications into various aspects of daily life, he says.
“But what would happen if I were to ask an LLM to help me hurt others? These are things that LLMs are programmed not to do, but people are finding ways to jailbreak LLMs.”
One example of a jailbreak is the addition of specially chosen characters to an input prompt that results in an LLM generating objectionable text. This is known as a suffix-based attack. Robey explains that, while prompts requesting toxic content are generally blocked by the safety filters implemented on LLMs, adding these kinds of suffixes, which are generally nonsensical bits of text, often bypasses these safety guardrails.
“This jailbreak has received widespread publicity due to its ability to elicit objectionable content from popular LLMs like ChatGPT and Bard,” Robey says. “And since its release several months ago, no algorithm has been shown to mitigate the threat this jailbreak poses.”
Robey’s research addresses these vulnerabilities. The proposed defense, which he calls SmoothLLM, involves duplicating and subtly perturbing input prompts to an LLM, with the goal of disrupting the suffix-based attack mechanism. Robey says, “If my prompt is 200 characters long and I change 10 characters, as a human it still retains its semantic content.”
While conceptually simple, this method has proven remarkably effective. “For every LLM that we considered, this success rate of the attack dropped below 1% when defended by SmoothLLM,” Robey says.
“Think of SmoothLLM as a security protocol that scrutinizes each request made to the LLM. It checks for any signs of manipulation or trickery in the input prompts. This is like having a security guard who double-checks each question for hidden meanings before allowing it to answer.”
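A toy sketch of that perturb-and-aggregate idea follows; it is not the authors’ implementation, and `query_llm` and `is_refusal` are hypothetical stand-ins for a real model call and a refusal check.

```python
# Toy sketch of the SmoothLLM idea: query several randomly perturbed copies of the
# prompt and follow the majority verdict. Not the authors' implementation.
import random
import string


def perturb(prompt: str, swap_fraction: float = 0.05) -> str:
    """Randomly swap a small fraction of the prompt's characters."""
    chars = list(prompt)
    n_swaps = max(1, int(len(chars) * swap_fraction))
    for i in random.sample(range(len(chars)), n_swaps):
        chars[i] = random.choice(string.printable)
    return "".join(chars)


def smooth_llm(prompt: str, query_llm, is_refusal, n_copies: int = 5) -> str:
    """Answer only if most perturbed copies of the prompt are not refused.

    `query_llm` and `is_refusal` are hypothetical callables: one queries the
    underlying LLM, the other flags a response as a safety refusal.
    """
    responses = [query_llm(perturb(prompt)) for _ in range(n_copies)]
    refusals = sum(is_refusal(r) for r in responses)
    if refusals > n_copies // 2:
        # Most perturbed copies tripped the safety filter: treat as an attack.
        return "Request declined."
    # Otherwise return one of the non-refused responses.
    return next(r for r in responses if not is_refusal(r))
```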
Aside from mitigating suffix-based jailbreaks, Robey explains that one of the most significant challenges in the field of AI safety is managing various trade-offs. “Balancing efficiency with robustness is something we need to be mindful of,” he says.
“We don’t want to overengineer a solution that’s overly complicated because that will result in significant monetary, computational, and energy-related costs. One key choice in the design of SmoothLLM was to maintain high query efficiency, meaning that our algorithm only uses a few low-cost queries to the LLM to detect potential jailbreaks.”
Future directions in AI safety
Looking ahead, Robey emphasizes the importance of AI safety and the ongoing battle against new forms of jailbreaking.
“There are many other jailbreaks that have been proposed more recently. For instance, attacks that use social engineering—rather than suffix-based attacks—to convince a language model to output objectionable content are of notable concern,” he says.
“This evolving threat landscape necessitates continuous refinement and adaptation of defense strategies.”
Robey also speaks to the broader implications of artificial intelligence safety, stressing the need for comprehensive policies and practices. “Ensuring the safe deployment of AI technologies is crucial,” he says. “We need to develop policies and practices that address the continually evolving space of threats to LLMs.”
Drawing an analogy with evolutionary biology, Robey views adversarial attacks as critical to the development of more robust artificial intelligence systems.
“Just like organisms adapt to environmental pressures, artificial intelligence systems can evolve to resist adversarial attacks,” he says. By embracing this evolutionary approach, Robey’s work will contribute to the development of AI systems that are not only resistant to current threats but are also adaptable to future challenges.
Source: University of Pennsylvania